JPMorgan's CIO Has Championed A Data Platform That Turbocharges AI

#artificialintelligence

JPMorgan Chase sees artificial intelligence (AI) as critical to its future success. And the mega-bank has a big advantage over many of its smaller rivals: the massive amount of data it gathers from sources such as the 50% of U.S. households with which it has some form of relationship and the $6 trillion worth of payment flows it handles daily. But until recently, identifying and pulling in relevant data to train AI models consumed around 60% of the time of the bank's growing army of data scientists — an inefficient use of an expensive and relatively scarce resource. Now a new data platform the bank has developed, called OmniAI, is helping it get relevant data into its models much faster.


Bill Gates just backed a chip startup that uses light to turbocharge AI

#artificialintelligence

Advances in computing, from speedier processors to cheaper data storage, helped ignite the new AI era. Now demand for even faster, more energy-efficient AI models is driving a wave of innovation in semiconductors.


Microsoft* Turbocharges AI with Intel FPGAs. You Can, Too.

#artificialintelligence

Today, Microsoft* announced a public preview of Azure Machine Learning Hardware Accelerated Models powered by Project Brainwave*, a new AI inferencing service. The service uses Intel Arria 10 FPGAs, configured as "soft DNN processing units" highly tuned to the ResNet-50 image recognition model, to deliver extraordinary throughput. Microsoft calls it "real-time AI." One year ago, Microsoft Azure CTO Mark Russinovich described the plan to build the Azure Cloud Services infrastructure with an FPGA in every node. Instead of creating node pools with specialized hardware accelerators for the wide-ranging workloads deployed in Azure, the Microsoft team opted for the flexibility of FPGAs, which can be reconfigured to provide hardware acceleration closely aligned to nearly any task.